
    Validation of DARTEL registration technique. Application to Mild Cognitive Impairment and Schizophrenia using Voxel Based Morphometry

    This research analyzes the anatomical variability of brain structures due to Mild Cognitive Impairment and Schizophrenia. This information is encoded by the spatial transformation between a Magnetic Resonance image and a template selected as the anatomical reference. To obtain this information efficiently, the DARTEL toolbox, a non-linear registration method, is studied

    Overview of the ImageCLEFmed 2020 Concept Prediction Task: Medical Image Understanding

    This paper describes the ImageCLEFmed 2020 Concept Detection Task. After first being proposed at ImageCLEF 2017, the medical task is in its 4th edition this year, as automatic concept detection from medical images remains a challenging task. In 2020, the format remained the same as in 2019, with a single sub-task. The concept detection task is part of the medical tasks, alongside the tuberculosis and visual question answering tasks. Similar to the 2019 edition, the data set focuses on radiology images rather than general biomedical images, but with an increased number of images. The distributed images were extracted from the biomedical open access literature (PubMed Central). The development data consists of 65,753 training and 15,970 validation images. Each image has corresponding Unified Medical Language System (UMLSℱ) concepts that were extracted from the original article image captions. In this edition, additional imaging acquisition technique labels were included in the distributed data; these were adopted for pre-filtering steps, concept selection and ensemble algorithms. Most of the applied approaches for the automatic detection of concepts were deep learning-based architectures. Long short-term memory (LSTM) recurrent neural networks (RNNs), adversarial auto-encoders, convolutional neural network (CNN) image encoders and transfer learning-based multi-label classification models were adopted. The performances of the submitted models (best score 0.3940) were evaluated using F1-scores computed per image and averaged across all 3,534 test images
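
    The evaluation metric above reduces to a set comparison per image. A minimal sketch in Python, assuming each image's predicted and ground-truth UMLS concepts are available as sets (the concept IDs below are placeholders, and this is not the official evaluation script):

        # Per-image F1 between predicted and ground-truth concept sets,
        # averaged over the test set (illustrative, not the official script).

        def image_f1(predicted: set, ground_truth: set) -> float:
            """F1 score between one image's predicted and true concept sets."""
            tp = len(predicted & ground_truth)
            if tp == 0:
                return 0.0
            precision = tp / len(predicted)
            recall = tp / len(ground_truth)
            return 2 * precision * recall / (precision + recall)

        def mean_f1(predictions: dict, ground_truths: dict) -> float:
            """Average the per-image F1 over all test images."""
            return sum(image_f1(predictions[i], ground_truths[i])
                       for i in ground_truths) / len(ground_truths)

        # Hypothetical example with placeholder concept IDs for two images.
        preds = {"img1": {"C0024485", "C0221198"}, "img2": {"C0040405"}}
        truth = {"img1": {"C0024485"}, "img2": {"C0040405", "C0817096"}}
        print(mean_f1(preds, truth))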

    Essex at ImageCLEFcaption 2020 task

    The University of Essex participated in the fourth edition of the ImageCLEFcaption task, which aims to detect concepts in radiology images as an approach to medical image understanding. In this paper, the University of Essex team presents its participation in the ImageCLEF 2020 caption task, based on a retrieval-based approach to concept detection. A Densely Connected Convolutional Network is used to encode the images. The paper compares several modifications of the baseline, considering aspects such as the image modality or the selection of concepts among the top retrieved images. The University of Essex was the third-best team participating in the task, achieving a 0.381 mean F1 score, very close to the results obtained by the top two teams. Code and pre-trained models are available at https://github.com/fjpa121197/ImageCLEFmedEssex2020
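
    In outline, such a retrieval-based detector encodes the query image, retrieves its nearest training images, and keeps the concepts most of them share. A minimal sketch under those assumptions (cosine similarity over L2-normalised embeddings and a simple vote threshold are choices made here, not necessarily the team's exact pipeline; see their repository for the real code):

        import numpy as np
        from collections import Counter

        def detect_concepts(query_emb, train_embs, train_concepts,
                            k=5, min_votes=3):
            """Predict concepts for one image from its k nearest training images.

            query_emb:      (d,) L2-normalised CNN embedding of the query image
            train_embs:     (n, d) L2-normalised embeddings of training images
            train_concepts: list of n sets of UMLS concept IDs
            """
            sims = train_embs @ query_emb      # cosine similarity to all images
            top_k = np.argsort(-sims)[:k]      # indices of the k nearest images
            votes = Counter(c for i in top_k for c in train_concepts[i])
            return {c for c, v in votes.items() if v >= min_votes}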

    Crowdsourcing for Medical Image Classification

    To help manage the large amount of biomedical images produced, image information retrieval tools have been developed to help access the right information at the right moment. To provide a test bed for image retrieval evaluation, the ImageCLEFmed benchmark proposes a biomedical classification task that focuses on automatically determining the image modality of figures from biomedical journal articles. In the training data for this machine learning task, some classes have many more images than others, so a few classes are not well represented, which is a challenge for automatic image classification. To address this problem, an automatic training set expansion was first proposed. To improve the accuracy of the automatic training set expansion, a manual verification of the training set is done using the crowdsourcing platform Crowdflower. The platform allows either paying external workers for the crowdsourcing or using personal contacts free of charge. Crowdsourcing requires strict quality control or trusted workers, but it can quickly give access to a large number of judges and thus improve many machine learning tasks. Results show that the manual annotation of a large amount of biomedical images carried out in this project can help with image classification
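
    The quality-control step typically amounts to aggregating redundant judgments per image. A minimal sketch of one common scheme, majority voting with an agreement threshold (the threshold and data layout are assumptions for illustration, not Crowdflower's actual mechanism):

        from collections import Counter

        def aggregate(judgments, min_agreement=0.6):
            """judgments: {image_id: [modality label from each judge]}.

            Returns the images whose majority label reaches the agreement
            threshold; the rest would be re-judged or dropped from training.
            """
            accepted = {}
            for image_id, labels in judgments.items():
                label, votes = Counter(labels).most_common(1)[0]
                if votes / len(labels) >= min_agreement:
                    accepted[image_id] = label
            return accepted

        # Hypothetical judgments from three crowd workers per image.
        print(aggregate({"fig1": ["CT", "CT", "MR"],
                         "fig2": ["MR", "CT", "US"]}))
        # fig1 is accepted as CT (2/3 agreement); fig2 is rejected.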

    Overview of the ImageCLEFcoral 2020 Task: Automated Coral Reef Image Annotation

    This paper presents an overview of the ImageCLEFcoral 2020 task that was organised as part of the Conference and Labs of the Evaluation Forum - CLEF Labs 2020. The task addresses the problem of automatically segmenting and labelling a collection of underwater images that can be used in combination to create 3D models for the monitoring of coral reefs. The data set comprises 440 human-annotated training images, with 12,082 hand-annotated substrates, from a single geographical region. The test set comprises a further 400 test images, with 8,640 annotated substrates, from four geographical regions varying in geographical similarity and ecological connectedness to the training data (100 images per subset). Fifteen teams registered, of which four submitted a total of 53 runs. The majority of submissions used deep neural networks, generally convolutional ones. Participants' entries showed that some level of automatic annotation of corals and benthic substrates is possible, despite this being a difficult task due to the variation in colour, texture and morphology between and within classification types

    Deep Learning in Neuroimaging: Effect of Data Leakage in Cross-validation Using 2D Convolutional Neural Networks

    In recent years, 2D convolutional neural networks (CNNs) have been extensively used to diagnose neurological diseases from magnetic resonance imaging (MRI) data due to their potential to discern subtle and intricate patterns. Despite the high performances reported in numerous studies, developing CNN models with good generalization abilities is still a challenging task due to possible data leakage introduced during cross-validation (CV). In this study, we quantitatively assessed the effect of data leakage caused by splitting 3D MRI data at the 2D slice level, using three 2D CNN models to classify patients with Alzheimer's disease (AD) and Parkinson's disease (PD). Our experiments showed that slice-level CV erroneously boosted the average slice-level accuracy on the test set by 30% on the Open Access Series of Imaging Studies (OASIS), 29% on the Alzheimer's Disease Neuroimaging Initiative (ADNI), 48% on the Parkinson's Progression Markers Initiative (PPMI) and 55% on a local de-novo PD Versilia dataset. Further tests on a randomly labeled OASIS-derived dataset produced about 96% (erroneous) accuracy with a slice-level split and 50% accuracy with a subject-level split, as expected from a randomized experiment. Overall, the effect of an erroneous slice-based CV is severe, especially for small datasets
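
    The leakage arises because neighbouring slices from the same subject's scan are nearly identical, so a slice-level split lets the model see part of each test subject during training. A minimal sketch contrasting the two splitting strategies with scikit-learn on toy placeholder data (not the paper's code):

        import numpy as np
        from sklearn.model_selection import KFold, GroupKFold

        # Toy stand-in data: 40 slices from 8 subjects (5 slices each).
        X = np.random.rand(40, 64, 64)          # 2D MRI slices (placeholder)
        y = np.repeat([0, 1], 20)               # diagnosis label per slice
        subjects = np.repeat(np.arange(8), 5)   # subject ID per slice

        # Slice-level CV: slices of one subject can land in both train and
        # test folds, so the model is tested on subjects it has partly seen.
        for train_idx, test_idx in KFold(n_splits=5, shuffle=True).split(X):
            leaked = set(subjects[train_idx]) & set(subjects[test_idx])
            print("slice-level fold, subjects leaked into test:", sorted(leaked))

        # Subject-level CV: all slices of a subject stay in the same fold.
        for train_idx, test_idx in GroupKFold(n_splits=5).split(
                X, y, groups=subjects):
            assert not set(subjects[train_idx]) & set(subjects[test_idx])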

    An annotated video dataset for computing video memorability

    Using a collection of publicly available links to short-form video clips, each of around 6 seconds in duration, 1,275 users manually annotated each video multiple times to indicate both the long-term and short-term memorability of the videos. The annotations were gathered as part of an online memory game and measured a participant's ability to recall having seen a video previously when shown a collection of videos. The recognition tasks were performed on videos seen within the previous few minutes for short-term memorability and within the previous 24 to 72 hours for long-term memorability. The data includes the reaction times for each recognition of each video. Associated with each video are text descriptions (captions) as well as a collection of image-level features computed on 3 frames extracted from each video (start, middle and end). Video-level features are also provided. The dataset was used in the Video Memorability task as part of the MediaEval benchmark in 2020
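
    From such annotations, a per-video memorability score is commonly computed as the proportion of participants who correctly recognised the video on second viewing. A minimal sketch under that assumption (the record layout is hypothetical, not the published dataset schema):

        from collections import defaultdict

        def memorability_scores(annotations):
            """annotations: iterable of (video_id, recognised) pairs, where
            recognised is True when the participant recalled the video."""
            hits, shows = defaultdict(int), defaultdict(int)
            for video_id, recognised in annotations:
                shows[video_id] += 1
                hits[video_id] += recognised
            return {v: hits[v] / shows[v] for v in shows}

        # Hypothetical short-term recognition results for two videos.
        anns = [("v1", True), ("v1", True), ("v1", False), ("v2", True)]
        print(memorability_scores(anns))   # {'v1': 0.67..., 'v2': 1.0}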

    Overview of the MediaEval 2022 predicting video memorability task

    This paper describes the 5th edition of the Predicting Video Memorability task as part of MediaEval 2022. This year we have reorganised and simplified the task in order to facilitate a greater depth of inquiry. Similar to last year, two datasets are provided in order to facilitate generalisation; however, this year we have replaced the TRECVid 2019 Video-to-Text dataset with the VideoMem dataset in order to remedy underlying data quality issues, and to prioritise short-term memorability prediction by elevating the Memento10k dataset as the primary dataset. Additionally, a fully fledged electroencephalography (EEG)-based prediction sub-task is introduced. In this paper, we outline the core facets of the task and its constituent sub-tasks, describing the datasets, evaluation metrics, and requirements for participant submissions

    Experiences from the MediaEval predicting media memorability task

    The Predicting Media Memorability task in the MediaEval evaluation campaign has been running annually since 2018, and several different tasks and data sets have been used over that time. This has allowed us to compare the performance of many memorability prediction techniques on the same data in a reproducible way, and to refine and improve those techniques. The resources created to compute media memorability are now being used by researchers well beyond the actual evaluation campaign. In this paper we present a summary of the task, including the collective lessons we have learned for the research community